Diffractive optical networks provide rich opportunities for visual computing tasks since the spatial information of a scene can be directly accessed by a diffractive processor without requiring any digital pre-processing steps. Here we present data class-specific transformations all-optically performed between the input and output fields-of-view (FOVs) of a diffractive network. The visual information of the objects is encoded into the amplitude (A), phase (P), or intensity (I) of the optical field at the input, which is all-optically processed by a data class-specific diffractive network. At the output, an image sensor-array directly measures the transformed patterns, all-optically encrypted using the transformation matrices pre-assigned to different data classes, i.e., a separate matrix for each data class. The original input images can be recovered by applying the correct decryption key (the inverse transformation) corresponding to the matching data class, while applying any other key leads to loss of information. The class-specificity of these all-optical diffractive transformations creates opportunities where different keys can be distributed to different users; each user can decode the acquired images of only one data class, serving multiple users in an all-optically encrypted manner. We numerically demonstrated class-specific all-optical A-->A, I-->I, and P-->I transformations using various image datasets. We also experimentally validated the feasibility of this framework by fabricating a class-specific I-->I transformation diffractive network using two-photon polymerization and successfully testing it at a wavelength of 1550 nm. Data class-specific all-optical transformations provide a fast and energy-efficient method for image and data encryption, enhancing data security and privacy.
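The linear-algebra core of this scheme can be illustrated with a minimal NumPy sketch (matrix sizes and class names here are illustrative, and the diffractive physics is abstracted away as a matrix multiplication): each data class is assigned an invertible transformation, and only the matching inverse recovers the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8 * 8                       # flattened toy image size

# One invertible "encryption" matrix per data class; the diffractive network
# approximates such a class-specific transform all-optically.
keys = {c: rng.standard_normal((n, n)) for c in ("class_0", "class_1")}

img = rng.random(n)                            # toy input image (flattened)
cipher = keys["class_0"] @ img                 # measured, encrypted pattern

ok  = np.linalg.inv(keys["class_0"]) @ cipher  # matched key: image recovered
bad = np.linalg.inv(keys["class_1"]) @ cipher  # mismatched key: information lost

print(np.allclose(ok, img))    # True
print(np.allclose(bad, img))   # False
```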
Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum, and at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9 and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands in the terahertz part of the spectrum. Due to their compact form factor and computation-free, power-efficient and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
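The virtual spectral filter array concept can be illustrated with a toy NumPy sketch of the 2x2=4-band case (array sizes assumed): the diffractive network routes each spectral band to one pixel slot of every 2x2 super-pixel on the monochrome sensor, and the multispectral frame is recovered by simply de-interleaving the mosaic.

```python
import numpy as np

H, W = 4, 4                                 # super-pixel grid (assumed size)
channels = np.random.rand(4, H, W)          # intensity image per spectral band

mosaic = np.zeros((2 * H, 2 * W))           # what the monochrome sensor records
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # pixel slot assigned to each band
for band, (dy, dx) in enumerate(offsets):
    mosaic[dy::2, dx::2] = channels[band]   # spectral routing onto the 2x2 array

# Recovering the multispectral frame is pure de-interleaving -- no filters,
# no reconstruction algorithm:
recovered = np.stack([mosaic[dy::2, dx::2] for dy, dx in offsets])
assert np.allclose(recovered, channels)
```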
Permutation matrices constitute an important computational building block frequently used in various fields, e.g., communications, information security, and data processing. Optical implementations of permutation operators with relatively large numbers of input-output interconnections, based on power-efficient, fast, and compact platforms, are highly desirable. Here, we present diffractive optical networks designed through deep learning to all-optically perform permutation operations that can scale to hundreds of thousands of interconnections between the input and output fields-of-view, using passive transmissive layers that are individually structured at the wavelength scale. Our findings indicate that the capacity of a diffractive optical network in approximating a given permutation operation is proportional to the number of diffractive layers and trainable transmission elements in the system. Such deeper diffractive network designs can pose practical challenges in terms of the physical alignment and output diffraction efficiency of the system. We addressed these challenges by designing misalignment-tolerant diffractive designs that can all-optically perform arbitrarily selected permutation operations, and experimentally demonstrated, for the first time, a diffractive permutation network operating in the THz part of the spectrum. Diffractive permutation networks might find various applications in, e.g., security, image encryption, and data processing, along with telecommunications; especially as the carrier frequencies in wireless communications approach the THz band, the presented diffractive permutation networks can potentially serve as channel routing and interconnection panels in wireless networks.
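As a minimal illustration of the underlying operation (sizes are arbitrary), a permutation of n channels can be written as an n x n permutation matrix, whose inverse is its transpose; the diffractive network approximates this matrix-vector product in free space:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                          # number of input-output channels (arbitrary)

perm = rng.permutation(n)       # an arbitrarily selected permutation
P = np.eye(n)[perm]             # permutation matrix: (P @ x)[i] == x[perm[i]]

x = rng.random(n)               # toy input field samples
y = P @ x                       # the operation the diffractive network realizes

# Permutation matrices are orthogonal, so the inverse is just the transpose:
assert np.allclose(P.T @ y, x)
```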
The limited space-bandwidth product (SBP) of wavefront modulators hinders the high-resolution synthesis/projection of images over a large field-of-view (FOV). We report a deep learning-enabled diffractive display design based on a jointly trained pair of an electronic encoder and a diffractive optical decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder, composed of a trained convolutional neural network (CNN), rapidly pre-processes the high-resolution images of interest so that their spatial information is encoded into low-resolution (LR) modulation patterns, projected through a low-SBP wavefront modulator. The diffractive decoder processes this LR-encoded information using thin transmissive layers that are structured using deep learning to all-optically synthesize and project super-resolved images at its output FOV. Our results indicate that this diffractive image display can achieve a super-resolution factor of ~4, demonstrating a ~16-fold increase in SBP. We also experimentally validated the success of this diffractive super-resolution display using 3D-printed diffractive decoders operating at the THz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and inspire the design of large-FOV and high-resolution displays that are compact, low-power, and computationally efficient.
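A minimal sketch of the jointly trained encoder-decoder pair is given below (PyTorch, with assumed sizes and a super-resolution factor of 4); the physical free-space decoder is abstracted here as a single trainable linear operator rather than a layer-by-layer propagation model:

```python
import torch
import torch.nn as nn

sr = 4                                     # assumed super-resolution factor
hi, lo = 64, 64 // sr                      # high-/low-resolution side lengths

encoder = nn.Sequential(                   # digital CNN encoder (runs electronically)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
    nn.AdaptiveAvgPool2d(lo),              # low-SBP modulation pattern
)
# Stand-in for the diffractive decoder: one trainable linear operator in
# place of cascaded free-space propagation through thin transmissive layers.
decoder = nn.Linear(lo * lo, hi * hi, bias=False)

target = torch.rand(8, 1, hi, hi)          # batch of high-resolution images
pattern = encoder(target)                  # LR-encoded modulation patterns
output = decoder(pattern.flatten(1)).view(8, 1, hi, hi)

loss = nn.functional.mse_loss(output, target)  # joint training objective
loss.backward()                                # gradients flow to both halves
```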
Optical computing has seen rapid advances over the last decades owing to its potential advantages such as scalability, low latency, and power efficiency. A core unit of a potential all-optical processor would be the NAND gate, which can be cascaded to perform an arbitrary logic operation. Here, we present the design and analysis of cascadable all-optical NAND gates using diffractive neural networks. We encoded the logical values at the input and output planes of a diffractive NAND gate using the relative optical power of two spatially separated apertures. Based on this architecture, we numerically optimized the design of a diffractive neural network composed of four passive layers to all-optically perform NAND operations using the diffraction of light, and cascaded these diffractive NAND gates to perform complex logic functions by successively feeding the output of one diffractive NAND gate into the input of another. We demonstrated the cascadability of our diffractive NAND gates by using identical diffractive designs to all-optically perform AND and OR operations as well as a half-adder. Cascadable all-optical NAND gates composed of spatially engineered passive diffractive layers can serve as a core component of various optical computing platforms.
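Because NAND is functionally complete, the cascading argument can be checked with a few lines of Python; the helper functions below mirror how the diffractive gates are chained, with the optical power-encoding of the bits abstracted into integers:

```python
def nand(a: int, b: int) -> int:
    """Logical NAND; optically, the bit is encoded in the relative power
    of two spatially separated apertures (abstracted here as 0/1)."""
    return 1 - (a & b)

# NAND is universal, so other gates follow by cascading -- exactly how the
# diffractive gates are chained, output aperture into input aperture:
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

def half_adder(a, b):
    return xor_(a, b), and_(a, b)   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```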
This study focuses on improving the optical character recognition (OCR) data for panels in the COMICS dataset, the largest dataset containing text and images from comic books. To do this, we developed a pipeline for OCR processing and labeling of comic books and created the first text detection and recognition datasets for western comics, called "COMICS Text+: Detection" and "COMICS Text+: Recognition". We evaluated the performance of state-of-the-art text detection and recognition models on these datasets and found significant improvement in word accuracy and normalized edit distance compared to the text in COMICS. We also created a new dataset called "COMICS Text+", which contains the extracted text from the textboxes in the COMICS dataset. Using the improved text data of COMICS Text+ in an existing comics processing model resulted in state-of-the-art performance on cloze-style tasks without changing the model architecture. The COMICS Text+ dataset can be a valuable resource for researchers working on tasks including text detection, recognition, and high-level processing of comics, such as narrative understanding, character relations, and story generation. All the data and inference instructions can be accessed at https://github.com/gsoykan/comics_text_plus.
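For reference, normalized edit distance, one of the two reported metrics, can be computed with a short dynamic-programming routine (this is the standard definition; the paper's exact normalization may differ):

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, dp[0] = dp[0], i
        for j, ct in enumerate(t, 1):
            # min over deletion, insertion, substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (cs != ct))
    return dp[-1]

def normalized_edit_distance(pred: str, gt: str) -> float:
    """Edit distance scaled to [0, 1] by the longer string (lower is better)."""
    return edit_distance(pred, gt) / max(len(pred), len(gt), 1)

print(normalized_edit_distance("COMICS Textt", "COMICS Text"))  # ~0.083
```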
Recent advances in distributed artificial intelligence (AI) have led to tremendous breakthroughs in various communication services, from fault-tolerant factory automation to smart cities. When distributed learning is run over a set of wirelessly connected devices, random channel fluctuations and the incumbent services running on the same network impact the performance of both distributed learning and the coexisting service. In this paper, we investigate a mixed-service scenario where a distributed AI workflow and ultra-reliable low-latency communication (URLLC) services run concurrently over a network. Consequently, we propose a risk-sensitivity-based formulation for device selection to minimize the AI training delays during its convergence period while ensuring that the operational requirements of the URLLC service are met. To address this challenging coexistence problem, we transform it into a deep reinforcement learning problem and address it via a framework based on the soft actor-critic algorithm. We evaluate our solution with a realistic and 3GPP-compliant simulator for factory automation use cases. Our simulation results confirm that our solution can significantly decrease the training delay of the distributed AI service while keeping the URLLC availability above its required threshold and close to the scenario where URLLC solely consumes all network resources.
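The paper's exact formulation is not reproduced here, but a common way to make a delay objective risk-sensitive is the entropic risk measure, sketched below as an assumption-labeled illustration of why tail delays get penalized more than under a risk-neutral mean:

```python
import numpy as np

def entropic_risk(delays: np.ndarray, beta: float) -> float:
    """Entropic risk: rho_beta(X) = (1/beta) * log E[exp(beta * X)].
    For beta > 0 this penalizes the tail of the delay distribution more
    heavily than the plain mean -- the spirit of a risk-sensitive
    device-selection objective under URLLC constraints (assumed form)."""
    return float(np.log(np.mean(np.exp(beta * delays))) / beta)

delays = np.random.exponential(1.0, size=10_000)   # toy AI training-round delays
print(np.mean(delays))                             # risk-neutral objective, ~1.0
print(entropic_risk(delays, beta=0.5))             # inflated by rare long delays, ~1.39
```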
Privacy-preserving inference via edge or encrypted computing paradigms encourages users of machine learning services to confidentially run a model on their personal data for a target task and only share the model's outputs with the service provider; e.g., to activate further services. Nevertheless, despite all confidentiality efforts, we show that a ''vicious'' service provider can approximately reconstruct its users' personal data by observing only the model's outputs, while keeping the target utility of the model very close to that of an ''honest'' service provider. We show the possibility of jointly training a target model (to be run at the users' side) and an attack model for data reconstruction (to be secretly used at the server's side). We introduce the ''reconstruction risk'': a new measure for assessing the quality of reconstructed data that better captures the privacy risk of such attacks. Experimental results on 6 benchmark datasets show that for low-complexity data types, or for tasks with a larger number of classes, a user's personal data can be approximately reconstructed from the outputs of a single target inference task. We propose a potential defense mechanism that helps to distinguish vicious vs. honest classifiers at inference time. We conclude this paper by discussing current challenges and open directions for future studies. We open-source our code and results as a benchmark for future work.
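A minimal sketch of such joint training (architectures, sizes, and loss weighting are illustrative, not the paper's) shows how a single objective can balance target utility against server-side reconstruction:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, n_classes = 784, 10   # e.g., flattened 28x28 inputs (assumed sizes)
f = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, n_classes))  # target model
g = nn.Sequential(nn.Linear(n_classes, 128), nn.ReLU(), nn.Linear(128, d_in))  # attack decoder

opt = torch.optim.Adam([*f.parameters(), *g.parameters()], lr=1e-3)
x = torch.rand(32, d_in)                  # toy batch of "personal" data
y = torch.randint(0, n_classes, (32,))

logits = f(x)                             # outputs shared with the server
x_hat = g(logits.softmax(dim=1))          # secret server-side reconstruction
loss = F.cross_entropy(logits, y)         # keep the target utility high ...
loss = loss + F.mse_loss(x_hat, x)        # ... while enabling reconstruction
opt.zero_grad()
loss.backward()
opt.step()
```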
Federated learning (FL) is a promising approach to enable the future Internet of Vehicles consisting of intelligent connected vehicles (ICVs) with powerful sensing, computing, and communication capabilities. We consider a base station (BS) coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner, in order to limit data traffic and privacy leakage. However, due to the mobility of vehicles, the connections between the BS and ICVs are short-lived, which affects the resource utilization of ICVs and thus the convergence speed of the training process. In this paper, we propose an accelerated FL-ICV framework that optimizes the duration of each training round and the number of local iterations for better convergence performance of FL. We propose a mobility-aware optimization algorithm called MOB-FL, which aims at maximizing the resource utilization of ICVs under short-lived wireless connections so as to increase the convergence speed. Simulation results based on beam-selection and trajectory-prediction tasks verify the effectiveness of the proposed solution.
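A toy sketch of one such round (all constants assumed) illustrates the mechanism MOB-FL optimizes: vehicles with short-lived connections complete only as many local iterations as their dwell time allows before the weighted aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)
global_w = np.zeros(4)                    # global model (toy, 4 parameters)
round_duration, iter_time = 5.0, 0.8      # seconds per round / per local iteration

updates, weights = [], []
for v in range(10):                       # 10 ICVs in range of the BS
    dwell = rng.exponential(4.0)          # short-lived connection lifetime
    budget = min(dwell, round_duration)
    local_iters = int(budget / iter_time) # iterations finished before dropout
    if local_iters == 0:
        continue                          # vehicle left before contributing
    local_w = global_w + 0.1 * local_iters * rng.standard_normal(4)  # toy local model
    updates.append(local_w)
    weights.append(local_iters)

if updates:                               # weighted FedAvg-style aggregation
    global_w = np.average(updates, axis=0, weights=weights)
```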
This technical report presents GPS++, the first-place solution to the Open Graph Benchmark Large-Scale Challenge (OGB-LSC 2022) for the PCQM4Mv2 molecular property prediction task. Our approach implements several key principles from the prior literature. At its core, our GPS++ method is a hybrid MPNN/Transformer model that incorporates 3D atom positions and an auxiliary denoising task. The effectiveness of GPS++ is demonstrated by achieving a 0.0719 mean absolute error on the independent test-challenge PCQM4Mv2 split. Thanks to Graphcore IPU acceleration, GPS++ scales to deep architectures (16 layers) that train in 3 minutes per epoch, and to a large ensemble (112 models) that completes the final predictions in 1 hour and 32 minutes, well under the allocated 4-hour inference budget. Our implementation is publicly available at: https://github.com/graphcore/ogb-lsc-pcqm4mv2.
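A hybrid MPNN/Transformer block can be sketched as follows (a simplified stand-in with assumed sizes, not the actual GPS++ architecture): a local message-passing update and a global self-attention update over all atoms are combined in each layer:

```python
import torch
import torch.nn as nn

class HybridGPSLayer(nn.Module):
    """Simplified GPS-style block: local message passing plus global
    self-attention, combined per layer (illustrative, not GPS++ itself)."""
    def __init__(self, d: int):
        super().__init__()
        self.mpnn = nn.Linear(2 * d, d)                 # local neighborhood mixing
        self.attn = nn.MultiheadAttention(d, 4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))

    def forward(self, h, edge_index):
        src, dst = edge_index                           # messages along edges
        msgs = torch.zeros_like(h).index_add_(0, dst, h[src])
        local = self.mpnn(torch.cat([h, msgs], dim=-1))
        glob, _ = self.attn(h[None], h[None], h[None])  # attention over all atoms
        return h + self.ffn(local + glob[0])            # residual combination

h = torch.rand(5, 64)                                   # 5 atoms, 64-dim features
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])      # directed edge list
out = HybridGPSLayer(64)(h, edges)                      # shape (5, 64)
```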